Biological evolution


Effects of non-uniform number of actions by Hawkes process on spatial cooperation

Miyagawa, Daiki, Ichinose, Genki

arXiv.org Artificial Intelligence

The emergence of cooperative behavior, despite natural selection favoring rational self-interest, presents a significant evolutionary puzzle. Evolutionary game theory elucidates why cooperative behavior can be advantageous for survival. However, the impact of non-uniformity in the frequency of actions, particularly when actions are altered in the short term, has received little scholarly attention. To demonstrate the relationship between non-uniformity in the frequency of actions and the evolution of cooperation, we conducted multi-agent simulations of evolutionary games. In our model, each agent performs actions in a chain reaction, resulting in a non-uniform distribution of the number of actions. To achieve a variety of non-uniform action frequencies, we introduced two types of chain-reaction rules: one where an agent's actions trigger its own subsequent actions, and another where an agent's actions depend on the actions of others. Our results revealed that cooperation evolves more effectively in scenarios with even slight non-uniformity in action frequency than in completely uniform cases. In addition, scenarios where agents' actions are primarily triggered by their own previous actions support cooperation more effectively, whereas those triggered by others' actions are less effective. This implies that a few highly active individuals contribute positively to cooperation, while the tendency to follow others' actions can hinder it.
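The self-triggering chain-reaction rule can be illustrated as a simple self-exciting branching process, in the spirit of a Hawkes process. This is a minimal sketch by this summary, not the authors' model: the trigger probability `p_trigger`, the discrete branching rule, and the function name are illustrative assumptions.

```python
import random

def action_counts(n_agents, base_actions, p_trigger, rng):
    """Each agent starts with base_actions pending actions; every action
    performed may trigger one follow-up action by the same agent with
    probability p_trigger (self-excitation, Hawkes-style).
    p_trigger = 0 reproduces a completely uniform action count."""
    counts = []
    for _ in range(n_agents):
        pending = base_actions
        total = 0
        while pending > 0:
            pending -= 1
            total += 1
            if rng.random() < p_trigger:  # this action excites another one
                pending += 1
        counts.append(total)
    return counts

rng = random.Random(0)
uniform = action_counts(1000, 3, 0.0, rng)  # every agent acts exactly 3 times
bursty = action_counts(1000, 3, 0.5, rng)   # a few agents act far more often
```

With `p_trigger < 1` the branching process is subcritical, so every agent's action count is finite, but the distribution becomes heavy-tailed: a handful of highly active agents emerge, which is exactly the kind of non-uniformity the abstract contrasts with the uniform case.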


Building Artificial Intelligence with Creative Agency and Self-hood

Gabora, Liane, Bach, Joscha

arXiv.org Artificial Intelligence

This paper is an invited layperson summary for The Academic of the paper referenced on the last page. We summarize how the formal framework of autocatalytic networks offers a means of modeling the origins of self-organizing, self-sustaining structures that are sufficiently complex to reproduce and evolve, be they organisms undergoing biological evolution, novelty-generating minds driving cultural evolution, or artificial intelligence networks such as large language models. The approach can be used to analyze and detect phase transitions in vastly complex networks that have proven intractable with other approaches, and suggests a promising avenue to building an autonomous, agentic AI self. It seems reasonable to expect that such an autocatalytic AI would possess creative agency akin to that of humans, and undergo psychologically healing -- i.e., therapeutic -- internal transformation through engagement in creative tasks. Moreover, creative tasks would be expected to help such an AI solidify its self-identity.


How the Poverty of the Stimulus Solves the Poverty of the Stimulus

Neural Information Processing Systems

Language acquisition is a special kind of learning problem because the outcome of learning of one generation is the input for the next. That makes it possible for languages to adapt to the particularities of the learner. In this paper, I show that this type of language change has important consequences for models of the evolution and acquisition of syntax. For both artificial systems and non-human animals, learning the syntax of natural languages is a notoriously hard problem. All healthy human infants, in contrast, learn any of the approximately 6000 human languages rapidly, accurately and spontaneously.


Peace on Earth (1987): Using telerobotics to check in on a swarm robot uprising on the Moon

Robohub

Recommendation: Read this classic hard sci-fi novel and expand your horizons about robots, teleoperation, and swarms. Stanislaw Lem was one of the most read science fiction authors in the world in his day, especially in the 70s and 80s, though not in America, because translations from his native Polish into English were rare. Lem famously did not like American science fiction, with a very few exceptions. One was Philip K. Dick, and it is no wonder, since Lem's 1987 novel Peace on Earth shares many of the same themes that Dick covered: militarization of robots, people losing their memory or not being what they seem, and government conspiracies. In some ways Peace on Earth is like the longer, more detailed, and, actually, *better* version of Dick's 1953 short story Second Variety (which was the basis for the Peter Weller movie Screamers). Peace on Earth also has a sort of Battlestar Galactica (reboot) backstory.


Challenges of Artificial Intelligence -- From Machine Learning and Computer Vision to Emotional Intelligence

Pietikäinen, Matti, Silven, Olli

arXiv.org Artificial Intelligence

Artificial intelligence (AI) has become a part of everyday conversation and of our lives. It is considered the new electricity that is revolutionizing the world. Both industry and academia are investing heavily in AI. However, there is also a lot of hype in the current AI debate. AI based on so-called deep learning has achieved impressive results in many problems, but its limits are already visible. AI has been under research since the 1940s, and the field has seen many ups and downs driven by over-expectations and the disappointments that followed them. The purpose of this book is to give a realistic picture of AI: its history, its potential, and its limitations. We believe that AI is a helper, not a ruler of humans. We begin by describing what AI is and how it has evolved over the decades. After the fundamentals, we explain the importance of massive data for the current mainstream of artificial intelligence. The most common representations and methods for AI, including machine learning, are covered. In addition, the main application areas are introduced. Computer vision has been central to the development of AI. The book provides a general introduction to computer vision and includes an exposure to the results and applications of our own research. Emotions are central to human intelligence, but they have seen little use in AI. We present the basics of emotional intelligence and our own research on the topic. We discuss super-intelligence that transcends human understanding, explaining why such an achievement seems impossible on the basis of present knowledge, and how AI could be improved. Finally, we summarize the current state of AI and what should be done in the future. In the appendix, we look at the development of AI education, especially from the perspective of the contents taught at our own university.


Towards a Theory of Evolution as Multilevel Learning

Vanchurin, Vitaly, Wolf, Yuri I., Katsnelson, Mikhail I., Koonin, Eugene V.

arXiv.org Artificial Intelligence

We formulate seven fundamental principles of evolution that appear to be necessary and sufficient to render a universe observable and show that they entail the major features of biological evolution, including replication and natural selection. These principles also follow naturally from the theory of learning. We formulate the theory of evolution using the mathematical framework of neural networks, which provides for detailed analysis of evolutionary phenomena. To demonstrate the potential of the proposed theoretical framework, we derive a generalized version of the Central Dogma of molecular biology by analyzing the flow of information during learning (back-propagation) and predicting (forward-propagation) the environment by evolving organisms. The more complex evolutionary phenomena, such as major transitions in evolution, in particular, the origin of life, have to be analyzed in the thermodynamic limit, which is described in detail in the accompanying paper.

Significance statement: Modern evolutionary theory gives a detailed quantitative description of microevolutionary processes that occur within evolving populations of organisms, but evolutionary transitions and emergence of multiple levels of complexity remain poorly understood. Here we establish correspondence between the key features of evolution, renormalizability of physical theories and learning dynamics, to outline a theory of evolution that strives to incorporate all evolutionary processes within a unified mathematical framework of the theory of learning. Under this theory, for example, natural selection readily arises from the learning dynamics, and in sufficiently complex systems, the same learning phenomena occur on multiple levels or on different scales, similar to the case of renormalizable physical theories.


Thinking Darwinian

#artificialintelligence

Some people have transformed others' views and understanding of life with the new ideas they presented, and Darwin is undoubtedly one of them. What distinguishes Darwin from other biologists and researchers is that he explained the evolutionary process algorithmically and grounded it in the laws of nature. Darwin's dangerous idea began in biology but has spread to fields from engineering to sociology. There is greatness in this idea, which makes it possible to conceive of infinite beauty and complexity.


Induction, Popper, and machine learning

Nielson, Bruce, Elton, Daniel C.

arXiv.org Artificial Intelligence

Francis Bacon popularized the idea that science is based on a process of induction, by which repeated observations are, in some unspecified way, generalized to theories on the assumption that the future resembles the past. This idea was criticized by Hume and others as untenable, leading to the famous problem of induction. It was not until the work of Karl Popper that this problem was resolved, through the demonstration that induction is not the basis of science and that the development of scientific knowledge instead rests on the same principles as biological evolution. Today, machine learning is also taught as being rooted in induction from big data. Solomonoff induction, implemented in an idealized Bayesian agent (Hutter's AIXI), is widely discussed and touted as a framework for understanding AI algorithms, even though real-world attempts to implement anything like AIXI immediately encounter fatal problems. In this paper, we contrast frameworks based on induction with Donald T. Campbell's universal Darwinism. We show that most AI algorithms in use today can be understood as running an evolutionary trial-and-error search over a solution space. We argue that a universal Darwinian framework provides a better foundation for understanding AI systems. Moreover, at a more meta level, the process of developing all AI algorithms can itself be understood within the framework of universal Darwinism.


Robots may soon be able to reproduce - will this change how we think about evolution? Emma Hart

The Guardian

From the bottom of the oceans to the skies above us, natural evolution has filled our planet with a vast and diverse array of lifeforms, with approximately 8 million species adapted to their surroundings in a myriad of ways. Yet 100 years after Karel Čapek coined the term robot, the functional abilities of many species still surpass the capabilities of current human engineering, which has yet to convincingly develop methods of producing robots that demonstrate human-level intelligence, move and operate seamlessly in challenging environments, and are capable of robust self-reproduction. But could robots ever reproduce? This, undoubtedly, forms a pillar of "life" as shared by all natural organisms. A team of researchers from the UK and the Netherlands have recently demonstrated a fully automated technology to allow physical robots to repeatedly breed, evolving their artificial genetic code over time to better adapt to their environment.


The Singularity Event

#artificialintelligence

According to Ray Kurzweil, "Singularity is a future period in which technology change will be so rapid and its impact so profound, that every aspect of human life will be irreversibly transformed; there won't be a clear distinction between the humans and the machines". He predicts that computers will not remain as we see them today. In the years to come they will become part of human bodies and brains, until we are hybrids of biological and non-biological intelligence. If we look back at our civilization as it existed and progressed a few thousand years ago, it in no way compares with the speed at which we are progressing now. A few thousand years ago, we barely existed, let alone progressed.